
    Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics

    We analyze a stochastic model of neuronal population dynamics with intrinsic noise. In the thermodynamic limit N → ∞, where N determines the size of each population, the dynamics is described by deterministic Wilson-Cowan equations. On the other hand, for finite N the dynamics is described by a master equation that determines the probability of spiking activity within each population. We first consider a single excitatory population that exhibits bistability in the deterministic limit. The steady-state probability distribution of the stochastic network has maxima at points corresponding to the stable fixed points of the deterministic network; the relative weighting of the two maxima depends on the system size. For large but finite N, we calculate the exponentially small rate of noise-induced transitions between the resulting metastable states using a Wentzel-Kramers-Brillouin (WKB) approximation and matched asymptotic expansions. We then consider a two-population excitatory/inhibitory network that supports limit cycle oscillations. Using a diffusion approximation, we reduce the dynamics to a neural Langevin equation, and show how the intrinsic noise amplifies subthreshold oscillations (quasicycles).
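    The finite-N master equation described above can be sampled directly with a Gillespie algorithm. The sketch below uses an illustrative one-population birth-death process (birth rate N·f(w·k/N − Ξ), death rate α·k, with a sigmoidal gain f); the gain function and parameter values are assumptions for illustration, not the paper's.

```python
import numpy as np

def gillespie_wc(N=50, T=200.0, alpha=1.0, w=6.0, theta=3.0, seed=0):
    """Gillespie simulation of a one-population stochastic Wilson-Cowan
    model: k active neurons, birth rate N*f(w*k/N - theta), death rate
    alpha*k.  Returns the empirical occupation-time distribution over k."""
    rng = np.random.default_rng(seed)
    f = lambda u: 1.0 / (1.0 + np.exp(-u))   # illustrative sigmoidal gain
    k, t = 0, 0.0
    counts = np.zeros(N + 1)                 # occupation time per state
    while t < T:
        birth = N * f(w * k / N - theta)
        death = alpha * k
        rate = birth + death
        dt = rng.exponential(1.0 / rate)
        counts[k] += min(dt, T - t)          # clip the last sojourn at T
        t += dt
        if rng.random() < birth / rate:
            k = min(k + 1, N)
        else:
            k = max(k - 1, 0)
    return counts / counts.sum()

p = gillespie_wc()
```

    With these (assumed) parameters the deterministic limit has stable fixed points near k/N ≈ 0.07 and k/N ≈ 0.93, so for large T the empirical distribution develops the two maxima discussed in the abstract.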

    Stochastic neural field theory and the system-size expansion

    We analyze a master equation formulation of stochastic neurodynamics for a network of synaptically coupled homogeneous neuronal populations each consisting of N identical neurons. The state of the network is specified by the fraction of active or spiking neurons in each population, and transition rates are chosen so that in the thermodynamic or deterministic limit (N → ∞) we recover standard activity-based or voltage-based rate models. We derive the lowest order corrections to these rate equations for large but finite N using two different approximation schemes, one based on the Van Kampen system-size expansion and the other based on path integral methods. Both methods yield the same series expansion of the moment equations, which at O(1/N) can be truncated to form a closed system of equations for the first and second order moments. Taking a continuum limit of the moment equations whilst keeping the system size N fixed generates a system of integrodifferential equations for the mean and covariance of the corresponding stochastic neural field model. We also show how the path integral approach can be used to study large deviation or rare event statistics underlying escape from the basin of attraction of a stable fixed point of the mean-field dynamics; such an analysis is not possible using the system-size expansion since the latter cannot accurately determine exponentially small transitions.
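    For a single population with illustrative one-step rates (birth f(w·x), decay α·x, both assumptions here), the standard system-size expansion gives the O(1/N) closed moment equations dx/dt = A(x) + ÂœA''(x)Σ and dΣ/dt = 2A'(x)Σ + B(x)/N, with drift A = f(wx) − αx and diffusion B = f(wx) + αx. A minimal sketch:

```python
import numpy as np

def moment_equations(w=0.5, alpha=1.0, N=100, T=20.0, dt=1e-3):
    """Euler integration of the O(1/N) truncated moment equations for a
    one-step birth-death process: drift A = f(w x) - alpha x, diffusion
    B = f(w x) + alpha x (illustrative rates, not the paper's network)."""
    f   = lambda u: 1.0 / (1.0 + np.exp(-u))        # illustrative gain
    fp  = lambda u: f(u) * (1.0 - f(u))             # f'
    fpp = lambda u: fp(u) * (1.0 - 2.0 * f(u))      # f''
    x, S = 0.0, 0.0                                 # mean and variance
    for _ in range(int(T / dt)):
        A   = f(w * x) - alpha * x
        Ap  = w * fp(w * x) - alpha
        App = w * w * fpp(w * x)
        B   = f(w * x) + alpha * x
        dx = A + 0.5 * App * S                      # 1/N correction to mean
        dS = 2.0 * Ap * S + B / N                   # variance equation
        x += dt * dx
        S += dt * dS
    return x, S
```

    At a stable fixed point the variance settles near B/(2|A'|N), so the fluctuations scale as 1/N, consistent with the truncation being valid for large N.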

    Front propagation in stochastic neural fields

    We analyse the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement (wandering) of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. One major result of our analysis is a comparison between freely propagating fronts and fronts locked to an externally moving stimulus. We show that the latter are much more robust to noise, since the stochastic wandering of the mean front profile is described by an Ornstein-Uhlenbeck process rather than a Wiener process, so that the variance in front position saturates in the long time limit rather than increasing linearly with time. Finally, we consider a stochastic neural field that supports a pulled front in the deterministic limit, and show that the wandering of such a front is now subdiffusive.
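    The contrast between Wiener and Ornstein-Uhlenbeck wandering can be checked with a direct simulation of dX = −ÎșX dt + σ dW: Îș = 0 mimics the freely propagating front (variance grows like σÂČt), while Îș > 0 mimics the stimulus-locked front (variance saturates at σÂČ/2Îș). Parameter values are illustrative.

```python
import numpy as np

def wander_variance(kappa, sigma=0.2, T=50.0, dt=0.01, trials=2000, seed=1):
    """Monte-Carlo variance of the front displacement X(T) for
    dX = -kappa*X dt + sigma dW (Euler-Maruyama).
    kappa > 0: stimulus-locked front (Ornstein-Uhlenbeck);
    kappa = 0: freely propagating front (Wiener process)."""
    rng = np.random.default_rng(seed)
    X = np.zeros(trials)
    for _ in range(int(T / dt)):
        X += -kappa * X * dt + sigma * np.sqrt(dt) * rng.standard_normal(trials)
    return X.var()

v_free   = wander_variance(kappa=0.0)   # grows like sigma^2 * T  (= 2 here)
v_locked = wander_variance(kappa=1.0)   # saturates near sigma^2 / (2*kappa)
```

    With these values the locked front's positional variance is roughly two orders of magnitude smaller at T = 50, illustrating the robustness claim in the abstract.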

    Stationary bumps in a piecewise smooth neural field model with synaptic depression

    We analyze the existence and stability of stationary pulses or bumps in a one-dimensional piecewise smooth neural field model with synaptic depression. The continuum dynamics is described in terms of a nonlocal integrodifferential equation, in which the integral kernel represents the spatial distribution of synaptic weights between populations of neurons whose mean firing rate is taken to be a Heaviside function of local activity. Synaptic depression dynamically reduces the strength of synaptic weights in response to increases in activity. We show that in the case of a Mexican hat weight distribution, there exists a stable bump for sufficiently weak synaptic depression. However, as synaptic depression becomes stronger, the bump becomes unstable with respect to perturbations that shift the boundary of the bump, leading to the formation of a traveling pulse. The local stability of a bump is determined by the spectrum of a piecewise linear operator that keeps track of the sign of perturbations of the bump boundary. This results in a number of differences from previous studies of neural field models with Heaviside firing rate functions, where any discontinuities appear inside convolutions so that the resulting dynamical system is smooth. We also extend our results to the case of radially symmetric bumps in two-dimensional neural field models.
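    As background for the bump construction, the classical Amari existence condition (without depression) requires the integrated Mexican hat weight W(2a) to equal the firing threshold Îș, where a is the bump half-width. The sketch below solves this for an assumed kernel w(x) = e^{−|x|} − œe^{−|x|/2}, so W(x) = e^{−x/2} − e^{−x}; this is a baseline illustration, not the paper's depression-modified analysis.

```python
import math

def bump_widths(kappa, lo=1e-6, hi=50.0):
    """Solve the Amari condition W(2a) = kappa for the two bump
    half-widths, with W(x) = exp(-x/2) - exp(-x) (assumed kernel).
    Returns (a_narrow, a_wide); the wide bump, where W'(2a) < 0,
    is the stable one in the classical analysis."""
    W = lambda x: math.exp(-x / 2) - math.exp(-x)
    x_peak = 2.0 * math.log(2.0)           # W is maximal here, W = 1/4
    def bisect(g, a, b):                   # simple bracketing bisection
        for _ in range(200):
            m = 0.5 * (a + b)
            if g(a) * g(m) <= 0:
                b = m
            else:
                a = m
        return 0.5 * (a + b)
    g = lambda x: W(x) - kappa
    x1 = bisect(g, lo, x_peak)             # rising branch of W
    x2 = bisect(g, x_peak, hi)             # falling branch of W
    return x1 / 2, x2 / 2

a_narrow, a_wide = bump_widths(kappa=0.2)
```

    For this kernel W peaks at 1/4, so bumps exist only for Îș < 1/4; at Îș = 0.2 the substitution u = e^{−x/2} reduces W(2a) = Îș to a quadratic, giving half-widths ≈ 0.324 and ≈ 1.286, which the bisection reproduces.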

    Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression

    We study the spatiotemporal dynamics of a two-dimensional excitatory neuronal network with synaptic depression. Coupling between populations of neurons is taken to be nonlocal, while depression is taken to be local and presynaptic. We show that the network supports a wide range of spatially structured oscillations, which are suggestive of phenomena seen in cortical slice experiments and in vivo. The particular form of the oscillations depends on initial conditions and the level of background noise. Given an initial, spatially localized stimulus, activity evolves to a spatially localized oscillating core that periodically emits target waves. Low levels of noise can spontaneously generate several pockets of oscillatory activity that interact via their target patterns. Periodic activity in space can also organize into spiral waves, provided that there is some source of rotational symmetry breaking due to external stimuli or noise. In the high gain limit, no oscillatory behavior exists, but a transient stimulus can lead to a single, outward propagating target wave.

    Neural field model of binocular rivalry waves

    We present a neural field model of binocular rivalry waves in visual cortex. For each eye we consider a one-dimensional network of neurons that respond maximally to a particular feature of the corresponding image such as the orientation of a grating stimulus. Recurrent connections within each one-dimensional network are assumed to be excitatory, whereas connections between the two networks are inhibitory (cross-inhibition). Slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We derive an analytical expression for the speed of a binocular rivalry wave as a function of various neurophysiological parameters, and show how properties of the wave are consistent with the wave-like propagation of perceptual dominance observed in recent psychophysical experiments. In addition to providing an analytical framework for studying binocular rivalry waves, we show how neural field methods provide insights into the mechanisms underlying the generation of the waves. In particular, we highlight the important role of slow adaptation in providing a “symmetry breaking mechanism” that allows waves to propagate.

    The effects of noise on binocular rivalry waves: a stochastic neural field model

    We analyse the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how multiplicative noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. The multiplicative noise also renormalizes the mean speed of the wave. We use our analysis to calculate the first passage time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation leads to quenched disorder in the neural fields during propagation of a wave.
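    The inverse Gaussian result can be illustrated by reducing the wave position to drifted Brownian motion dX = v dt + √(2D) dW and asking when it first reaches a distance L; the density is then f(t) = L(4πDtÂł)^{−1/2} exp(−(L − vt)ÂČ/4Dt). The parameter values below are illustrative, not fitted to the model.

```python
import numpy as np

def ig_density(t, L, v, D):
    """Inverse Gaussian first-passage density for a wave with mean
    speed v and effective diffusivity D to travel a distance L."""
    return L / np.sqrt(4.0 * np.pi * D * t ** 3) \
        * np.exp(-(L - v * t) ** 2 / (4.0 * D * t))

def mc_fpt(L=5.0, v=1.0, D=0.1, dt=1e-3, trials=3000, seed=2):
    """Monte-Carlo first passage times of dX = v dt + sqrt(2D) dW to X = L."""
    rng = np.random.default_rng(seed)
    X = np.zeros(trials)
    t = np.zeros(trials)
    alive = np.ones(trials, dtype=bool)
    step = 0
    while alive.any() and step < int(50 / dt):
        X[alive] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
        step += 1
        hit = alive & (X >= L)               # record first crossing of L
        t[hit] = step * dt
        alive &= ~hit
    return t[t > 0]

times = mc_fpt()
```

    The inverse Gaussian mean is L/v (here 5), which the Monte-Carlo sample mean matches; the density itself integrates to one over t > 0.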

    Random intermittent search and the tug-of-war model of motor-driven transport

    We formulate the tug-of-war model of microtubule cargo transport by multiple molecular motors as an intermittent random search for a hidden target. A motor-complex consisting of multiple molecular motors with opposing directional preference is modeled using a discrete Markov process. The motors randomly pull each other off of the microtubule so that the state of the motor-complex is determined by the number of bound motors. The tug-of-war model prescribes the state transition rates and corresponding cargo velocities in terms of experimentally measured physical parameters. We add space to the resulting Chapman-Kolmogorov (CK) equation so that we can consider delivery of the cargo to a hidden target somewhere on the microtubule track. Using a quasi-steady state (QSS) reduction technique we calculate analytical approximations of the mean first passage time (MFPT) to find the target. We show that there exists an optimal adenosine triphosphate (ATP) concentration that minimizes the MFPT for two different cases: (i) the motor complex is composed of equal numbers of kinesin motors bound to two different microtubules (symmetric tug-of-war model), and (ii) the motor complex is composed of different numbers of kinesin and dynein motors bound to a single microtubule (asymmetric tug-of-war model).
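    The MFPT computation for a discrete Markov process of this kind reduces to one linear solve: if Q is the generator restricted to the transient states, the vector of MFPTs satisfies Q τ = −1. The sketch below applies this to a toy birth-death chain on the number of bound motors (constant binding/unbinding rates, no force dependence), which is a simplification of the actual tug-of-war rates.

```python
import numpy as np

def detachment_mfpt(n_max=4, k_on=5.0, k_off=1.0):
    """Mean first passage time to complete detachment (state k = 0) for a
    birth-death chain on the number of bound motors k = 1..n_max, with
    binding rate (n_max - k)*k_on and unbinding rate k*k_off
    (illustrative rates).  Solves Q tau = -1 on the transient states."""
    n = n_max
    Q = np.zeros((n, n))                     # generator on k = 1..n_max
    for i, k in enumerate(range(1, n_max + 1)):
        up = (n_max - k) * k_on              # one more motor binds
        down = k * k_off                     # one motor unbinds
        if k < n_max:
            Q[i, i + 1] = up
        if k > 1:
            Q[i, i - 1] = down               # k = 1 -> 0 leaks out of Q
        Q[i, i] = -(up + down)
    return np.linalg.solve(Q, -np.ones(n))   # tau[i] = MFPT from k = i + 1

tau = detachment_mfpt()
```

    For a single motor (n_max = 1) the chain is pure decay at rate k_off, so the MFPT is 1/k_off, which makes a convenient sanity check on the linear solve.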

    A theory for the alignment of cortical feature maps during development

    We present a developmental model of ocular dominance column formation that takes into account the existence of an array of intrinsically specified cytochrome oxidase blobs. We assume that there is some molecular substrate for the blobs early in development, which generates a spatially periodic modulation of experience-dependent plasticity. We determine the effects of such a modulation on a competitive Hebbian mechanism for the modification of the feedforward afferents from the left and right eyes. We show how alternating left and right eye dominated columns can develop, in which the blobs are aligned with the centers of the ocular dominance columns and receive a greater density of feedforward connections, thus becoming defined extrinsically. More generally, our results suggest that the presence of periodically distributed anatomical markers early in development could provide a mechanism for the alignment of cortical feature maps.

    Filling of a Poisson trap by a population of random intermittent searchers

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver their cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean field limit N → ∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate ÎŒ(t) = ∫₀ᔗ λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasi-steady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite N. We thus determine how the mean first passage time (MFPT) for filling the target depends on N and n.
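    Given any rate λ(t), the waiting-time density for the n-th delivery of an inhomogeneous Poisson process is f_n(t) = λ(t) ÎŒ(t)^{n−1} e^{−Ό(t)}/(n−1)!, with ÎŒ(t) the integrated rate. The sketch below evaluates this numerically; the saturating λ(t) is an illustrative stand-in for the rate obtained from the reaction-hyperbolic equations.

```python
import numpy as np
from math import factorial

def waiting_time_density(lam, t, n=1):
    """Density f_n(t) of the time to the n-th event of an inhomogeneous
    Poisson process with rate lam(t):
    f_n(t) = lam(t) * mu(t)**(n-1) * exp(-mu(t)) / (n-1)!,
    where mu(t) is the integrated rate (trapezoidal rule on the grid t)."""
    rate = lam(t)
    mu = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t)))
    )
    return rate * mu ** (n - 1) * np.exp(-mu) / factorial(n - 1)

# Illustrative saturating filling rate, not the paper's lambda(t)
lam = lambda s: 0.5 * (1.0 - np.exp(-s))
t = np.linspace(0.0, 50.0, 20001)
f1 = waiting_time_density(lam, t, n=1)   # first delivery
f2 = waiting_time_density(lam, t, n=2)   # second delivery
```

    Since ÎŒ(t) → ∞ here, each f_n integrates to one, and the density for the second delivery peaks later than the first, as expected for successive fillings of the trap.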
    • 
